
[model_free_ptq] Enhance to work with previously quantized checkpoints like nvidia/DeepSeek-R1-NVFP4 #2228

Merged
dsikka merged 63 commits into main from bdellabe/example-dsr1-nvfp4-fp8block
Mar 10, 2026

Conversation

@brian-dellabetta
Collaborator

@brian-dellabetta brian-dellabetta commented Jan 13, 2026

Prerequisites (tests will fail until merged):

SUMMARY:

This PR enhances the model_free_ptq entrypoint to work with previously quantized checkpoints. The added example extends the nvidia/DeepSeek-R1-NVFP4 nvfp4-quantized checkpoint to:

  • convert modelopt's NVFP4 format to compressed-tensors (CT) format for the corresponding mlp/expert layers.
  • quantize all compatible linear self_attn layers to FP8_BLOCK, including those whose shapes are not exactly divisible by block_size[1].
  • merge the two quantization configs into a single compressed-tensors config under "quantization_config" in config.json.
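The ragged-block case in the second bullet can be sketched in plain Python. This is a toy illustration, not the llm-compressor implementation: edge blocks simply cover whatever rows and columns remain after ceil-dividing the weight shape by the block size.

```python
# Toy sketch (not the llm-compressor implementation): per-block max-abs
# weight scales when a dimension is not evenly divisible by the block size.
# Edge blocks are smaller and cover the remaining rows/columns.

def block_scales(weight, block_shape=(2, 2)):
    """Return a grid of max-abs scales, one per (possibly ragged) block."""
    rows, cols = len(weight), len(weight[0])
    br, bc = block_shape
    n_row_blocks = -(-rows // br)  # ceil division
    n_col_blocks = -(-cols // bc)
    scales = []
    for i in range(n_row_blocks):
        row_scales = []
        for j in range(n_col_blocks):
            block = [
                weight[r][c]
                for r in range(i * br, min((i + 1) * br, rows))
                for c in range(j * bc, min((j + 1) * bc, cols))
            ]
            row_scales.append(max(abs(v) for v in block))
        scales.append(row_scales)
    return scales

# A 3x5 weight with 2x2 blocks yields a 2x3 grid of scales; the last
# row/column of blocks is ragged.
w = [
    [1.0, -2.0, 3.0, 0.5, 4.0],
    [0.1, 0.2, -0.3, 0.4, 0.5],
    [6.0, 7.0, 8.0, 9.0, -10.0],
]
print(block_scales(w))
```

The real FP8_BLOCK path uses [128, 128] blocks; the ceil-division bookkeeping is the same idea at scale.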

Changes to src:

  • removes the constraint that targets must be "Linear" from model_free_ptq, as this is no longer an issue in vllm.
  • imports the Converter abstraction from the compressed-tensors convert_checkpoint entrypoint, so that conversion from modelopt NVFP4 to CT format can happen at the same time as converting layers to a compressed form.
  • moves some helper code to CT and imports it here instead.
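The Converter abstraction isn't reproduced here; as a rough, hypothetical sketch (the class, methods, and functions below are illustrative stand-ins, not the compressed-tensors or llm-compressor API), a converter that can be composed with the PTQ pass might look like:

```python
# Hypothetical sketch of a checkpoint "Converter" hook (names are
# illustrative, not the real API): a converter rewrites tensors from a
# source format before the PTQ pass compresses them further, so both
# steps can happen in a single pass over the checkpoint.

from typing import Dict, Optional

Tensors = Dict[str, float]  # toy stand-in for a model state dict

class Converter:
    """Renames/rewrites tensors from one checkpoint format to another."""

    def __init__(self, key_map: Dict[str, str]):
        self.key_map = key_map

    def convert(self, tensors: Tensors) -> Tensors:
        # Keys present in the map are renamed; everything else passes through.
        return {self.key_map.get(k, k): v for k, v in tensors.items()}

def model_free_ptq(tensors: Tensors, converter: Optional[Converter] = None) -> Tensors:
    if converter is not None:  # conversion happens inline, in the same pass
        tensors = converter.convert(tensors)
    return {k: round(v, 1) for k, v in tensors.items()}  # toy "quantization"

ckpt = {"modelopt.w": 0.1234}
out = model_free_ptq(ckpt, Converter({"modelopt.w": "ct.w"}))
```

The design point is that conversion and compression share one traversal of the checkpoint rather than requiring an intermediate full copy on disk.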

TEST PLAN:

  • Checkpoint (and script to run) available at https://huggingface.co/bdellabe/DeepSeek-R1-NVFP4-FP8-BLOCK. Works in vllm 0.15.1.
  • Confirmed the checkpoint is equivalent when running convert_checkpoint(..., converter=...) + model_free_ptq(..., converter=None) vs. model_free_ptq(..., converter=...).
  • Updated model_free_ptq tests to ensure they work when stacked (equivalently, when a CT checkpoint is used as the input model to model_free_ptq) and that the quantization config is correct.

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please add the label only once the PR is code complete and local testing has been performed.

@gemini-code-assist
Contributor

Summary of Changes

Hello @brian-dellabetta, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new example script for applying model-free post-training quantization to the nvidia/DeepSeek-R1-NVFP4 model. The script specifically targets certain self_attn layers for FP8 block quantization, aiming to demonstrate the use of llmcompressor and compressed-tensors for model compression. The PR is currently a work in progress, with further integration steps planned.

Highlights

  • New Example Script: A new example script, dsr1_nvfp4_fp8_block.py, has been added to demonstrate model-free post-training quantization (PTQ) for the nvidia/DeepSeek-R1-NVFP4 model.
  • FP8 Block Quantization: The script applies FP8 BLOCK quantization to the weights and FP8 GROUP quantization to the input activations of specific self_attn linear layers (kv_b_proj, o_proj, q_a_proj, q_b_proj) within the DeepSeek-R1-NVFP4 model.
  • Quantization Configuration: The quantization scheme uses a block structure of [128, 128] for weights and a group size of 128 for input activations, with both being symmetric and float types.
  • Work in Progress: This pull request is marked as Work In Progress, with pending tasks including converting NVFP4 tensor packing order and merging quantization configurations into the config.json.
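Assembled as a config fragment, the scheme described in the highlights might look roughly like the following. The field names approximate the compressed-tensors config style and are illustrative, not copied from the checkpoint's actual config.json.

```python
# Illustrative quantization_config fragment approximating the described
# scheme: FP8 block-quantized weights ([128, 128] blocks) plus FP8
# group-quantized input activations (group size 128), applied to specific
# self_attn projections. Field names are approximate, not authoritative.

scheme = {
    "config_groups": {
        "group_0": {
            "targets": [
                "re:.*self_attn.*(kv_b_proj|o_proj|q_a_proj|q_b_proj)"
            ],
            "weights": {
                "num_bits": 8,
                "type": "float",
                "strategy": "block",
                "block_structure": [128, 128],
                "symmetric": True,
            },
            "input_activations": {
                "num_bits": 8,
                "type": "float",
                "strategy": "group",
                "group_size": 128,
                "symmetric": True,
            },
        }
    }
}
```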


Contributor

@gemini-code-assist bot left a comment


Code Review

This PR adds a new example script for model-free PTQ on the nvidia/DeepSeek-R1-NVFP4 model. The script correctly sets up the quantization scheme to apply FP8-Block quantization to specific self-attention layers. My main feedback is focused on improving the clarity and readability of the layer selection logic. The current implementation uses a complex regex in the ignore list, which is not intuitive. I've suggested either using the more idiomatic targets list or, if that's not possible, significantly improving the comments to make the current approach easier to understand. As this is an example script, clarity is paramount.

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify
Contributor

mergify bot commented Jan 14, 2026

Documentation update

@mergify mergify bot added the documentation Improvements or additions to documentation label Jan 14, 2026
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify
Contributor

mergify bot commented Jan 15, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need to install the
dev extras to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Jan 15, 2026

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Jan 15, 2026

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Jan 16, 2026

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Jan 16, 2026
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Mar 5, 2026

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Mar 5, 2026

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Mar 5, 2026
brian-dellabetta and others added 4 commits March 5, 2026 22:02
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@brian-dellabetta brian-dellabetta added the ready When a PR is ready for review label Mar 5, 2026
@dsikka dsikka enabled auto-merge (squash) March 10, 2026 15:53
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@dsikka dsikka merged commit 370c04c into main Mar 10, 2026
13 checks passed
@dsikka dsikka deleted the bdellabe/example-dsr1-nvfp4-fp8block branch March 10, 2026 17:15

Labels

documentation: Improvements or additions to documentation
enhancement: New feature or request
ready: When a PR is ready for review


4 participants